MetaFormer, the abstracted architecture of Transformer, has been found to play a significant role in achieving competitive performance. In this paper, we further explore the capacity of MetaFormer, again without focusing on token mixer design: we introduce several baseline models under MetaFormer using the most basic or common mixers, and summarize our observations as follows. (1) MetaFormer ensures a solid lower bound of performance. By merely adopting identity mapping as the token mixer, the MetaFormer model, termed IdentityFormer, achieves >80% accuracy on ImageNet-1K. (2) MetaFormer works well with arbitrary token mixers. Even when specifying the token mixer as a random matrix, the resulting model, RandFormer, yields an accuracy of >81%, outperforming IdentityFormer. Rest assured of MetaFormer's results when new token mixers are adopted. (3) MetaFormer effortlessly delivers state-of-the-art results. With just conventional token mixers dating back five years, the models instantiated from MetaFormer already beat the state of the art. (a) ConvFormer outperforms ConvNeXt. Taking common depthwise separable convolutions as the token mixer, the model termed ConvFormer, which can be regarded as a pure CNN, outperforms the strong CNN model ConvNeXt. (b) CAFormer sets a new record on ImageNet-1K. By simply applying depthwise separable convolutions as the token mixer in the bottom stages and vanilla self-attention in the top stages, the resulting model CAFormer sets a new record on ImageNet-1K: it achieves 85.5% accuracy at 224x224 resolution under normal supervised training, without external data or distillation. In our expedition to probe MetaFormer, we also find that a new activation, StarReLU, reduces activation FLOPs by 71% compared with GELU while achieving better performance. We expect StarReLU to have great potential in MetaFormer-like models and other neural networks.
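As a concrete reading of the StarReLU description, here is a minimal PyTorch sketch (an illustration, not the released implementation): StarReLU(x) = s · ReLU(x)² + b with learnable scale and bias, initialized under the assumption that the output should be roughly zero-mean and unit-variance for standard-normal inputs.

```python
import math
import torch
import torch.nn as nn

class StarReLU(nn.Module):
    """StarReLU(x) = s * relu(x)**2 + b with learnable scale s and bias b.

    Compared with GELU, the elementwise cost is a single square plus an
    affine transform (no erf/tanh), which is where the activation-FLOP
    saving comes from.
    """
    def __init__(self):
        super().__init__()
        # Assumed initialization: for x ~ N(0, 1), E[relu(x)^2] = 0.5 and
        # Var[relu(x)^2] = 1.25, so this choice makes the output roughly
        # zero-mean and unit-variance at initialization.
        self.scale = nn.Parameter(torch.tensor(1.0 / math.sqrt(1.25)))
        self.bias = nn.Parameter(torch.tensor(-0.5 / math.sqrt(1.25)))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.scale * torch.relu(x) ** 2 + self.bias
```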
The conventional centralized deep learning paradigm is infeasible when data from different sources cannot be shared due to privacy or transmission restrictions. To address this, federated learning has been introduced to transfer knowledge across multiple sources (clients) with non-shared data, while optimizing a globally generalized central model (server). Existing federated learning paradigms mainly focus on transferring holistic high-level knowledge (such as classes), which is closely related to the specific objects of interest and may therefore be vulnerable to inversion attacks. In contrast, in this work we consider transferring mid-level semantic knowledge (such as attributes), which is insensitive to the specific objects of interest and thus more privacy-preserving and scalable. To this end, we formulate a new Federated Zero-Shot Learning (FZSL) paradigm that learns mid-level semantic knowledge from non-shared local data and aggregates a globally generalizable central model for deployment. To improve model discriminative ability, we propose enriching the mid-level semantic space in FZSL with semantic knowledge augmentation drawn from external knowledge. Extensive experiments on five zero-shot learning benchmark datasets validate the effectiveness of our approach for optimizing a generalizable federated learning model via mid-level semantic knowledge transfer.
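To make the paradigm concrete, the following is a hypothetical FedAvg-style sketch of the described setup; the model, loss, and function names are assumptions for illustration, not the paper's implementation. Each client fits a mapping from visual features into a shared attribute (mid-level semantic) space on its private data, and only weights reach the server.

```python
import copy
import torch
import torch.nn as nn

def federated_round(global_model: nn.Module, client_loaders,
                    local_epochs: int = 1, lr: float = 1e-3) -> nn.Module:
    """One hypothetical FZSL round: clients regress visual features onto
    class attribute vectors locally; the server averages client weights.
    Raw data never leaves a client, only model parameters do."""
    client_states = []
    for loader in client_loaders:
        local = copy.deepcopy(global_model)
        opt = torch.optim.SGD(local.parameters(), lr=lr)
        for _ in range(local_epochs):
            for feats, attrs in loader:  # (visual features, attributes)
                loss = nn.functional.mse_loss(local(feats), attrs)
                opt.zero_grad()
                loss.backward()
                opt.step()
        client_states.append(local.state_dict())
    # Server side: parameter-wise average of the client models (FedAvg).
    avg = {k: torch.stack([s[k].float() for s in client_states]).mean(0)
           for k in client_states[0]}
    global_model.load_state_dict(avg)
    return global_model
```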
Transformers have shown great potential in computer vision tasks. A common belief is that their attention-based token mixer module contributes most to their competence. However, recent works show that the attention-based module in Transformers can be replaced by spatial MLPs and the resulting models still perform quite well. Based on this observation, we hypothesize that the general architecture of Transformers, rather than the specific token mixer module, is more essential to the model's performance. To verify this, we deliberately replace the attention module in Transformers with an embarrassingly simple spatial pooling operator to conduct only the most basic token mixing. Surprisingly, we observe that the derived model, termed PoolFormer, achieves competitive performance on multiple computer vision tasks. For example, on ImageNet-1K, PoolFormer achieves 82.1% top-1 accuracy, surpassing the well-tuned vision Transformer/MLP-like baselines DeiT-B/ResMLP-B24 by 0.3%/1.1% accuracy with 35%/52% fewer parameters and 48%/60% fewer MACs. The effectiveness of PoolFormer verifies our hypothesis and urges us to initiate the concept of "MetaFormer", a general architecture abstracted from Transformers without specifying the token mixer. Based on extensive experiments, we argue that MetaFormer is the key player in achieving superior results for recent Transformer and MLP-like models on vision tasks. This work calls for more future research dedicated to improving MetaFormer instead of focusing on token mixer modules. Additionally, our proposed PoolFormer can serve as a starting baseline for future MetaFormer architecture design. Code is available at https://github.com/sail-sg/poolformer
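The pooling token mixer is simple enough to write out in a few lines. The sketch below follows the description above (and the spirit of the released code): local average pooling acts as the mixer, with the input subtracted because the surrounding MetaFormer block adds it back through its residual connection.

```python
import torch.nn as nn

class Pooling(nn.Module):
    """Token mixer replacing attention: average pooling over a local window.

    Subtracting the input cancels the residual shortcut around the mixer,
    so the block's output is the pooled features alone."""
    def __init__(self, pool_size: int = 3):
        super().__init__()
        self.pool = nn.AvgPool2d(pool_size, stride=1,
                                 padding=pool_size // 2,
                                 count_include_pad=False)

    def forward(self, x):  # x: (batch, channels, height, width)
        return self.pool(x) - x
```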
Skeleton-based action recognition is widely used in various areas, such as surveillance and human-machine interaction. Existing models are mainly learned in a supervised manner, which depends on large-scale labeled data and can be infeasible when labels are prohibitively expensive. In this paper, we propose a novel Contrastive-Reconstruction Representation Learning network (CRRL) that simultaneously captures postures and motion dynamics for unsupervised skeleton-based action recognition. It consists of three main parts: a sequence reconstructor, a contrastive motion learner, and an information fuser. The sequence reconstructor learns representations by reconstructing skeleton coordinate sequences, so the learned representations tend to focus on trivial postural coordinates and hesitate in motion learning. To enhance motion learning, the contrastive motion learner performs contrastive learning between representations learned from the coordinate sequence and from the corresponding velocity sequence. Finally, in the information fuser, we explore various strategies for combining the sequence reconstructor and the contrastive motion learner, and propose to capture postures and motions simultaneously via a knowledge-distillation-based fusion strategy that transfers motion learning from the contrastive motion learner to the sequence reconstructor. Experimental results on several benchmarks, namely NTU RGB+D 60, NTU RGB+D 120, CMU Mocap, and NW-UCLA, demonstrate the promise of the proposed CRRL method against existing state-of-the-art methods.
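A minimal sketch of the contrastive motion learner's core idea, assuming a standard InfoNCE loss form (the paper's exact loss and encoders may differ): velocities are first-order temporal differences of the coordinate sequence, and encodings of the two views of the same clip are pulled together while other clips in the batch are pushed apart.

```python
import torch
import torch.nn.functional as F

def velocity(seq: torch.Tensor) -> torch.Tensor:
    """seq: (batch, time, joints * coords). The first-order temporal
    difference strips static pose and keeps motion dynamics."""
    return seq[:, 1:] - seq[:, :-1]

def info_nce(z_coord: torch.Tensor, z_vel: torch.Tensor,
             temperature: float = 0.07) -> torch.Tensor:
    """Assumed InfoNCE form: matched (coordinate, velocity) encodings of
    the same clip are positives; all other batch pairings are negatives."""
    z1 = F.normalize(z_coord, dim=1)
    z2 = F.normalize(z_vel, dim=1)
    logits = z1 @ z2.t() / temperature          # (batch, batch) similarities
    targets = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, targets)
```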
The past two decades have seen increasingly rapid advances in the field of multi-view representation learning, owing to its ability to extract useful information from diverse domains and thereby facilitate multi-view applications. However, the community faces two challenges: i) how to learn robust representations from a large amount of unlabeled data that withstand noise or incomplete-view settings, and ii) how to balance view consistency and complementarity across various downstream tasks. To this end, we utilize a deep fusion network to fuse view-specific representations into a view-common representation, extracting high-level semantics to obtain robust representations. In addition, we employ a clustering task to guide the fusion network and prevent it from collapsing to trivial solutions. To balance consistency and complementarity, we then design an asymmetrical contrastive strategy that aligns the view-common representation with each view-specific representation. These modules are incorporated into a unified method called CLustering-guided cOntrastiVE fusioN (CLOVEN). We quantitatively and qualitatively evaluate the proposed method on five datasets, demonstrating that CLOVEN outperforms 11 competitive multi-view learning methods in clustering and classification. In the incomplete-view scenario, our proposed method resists noise interference better than its competitors. Furthermore, visualization analysis shows that CLOVEN preserves the intrinsic structure of view-specific representations while also improving the compactness of the view-common representation. Our source code will be available soon at https://github.com/guanzhou-ke/cloven.
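A hedged sketch of the asymmetrical contrastive strategy as described: the fused view-common representation is aligned with every view-specific representation (common-to-specific only, never specific-to-specific), so each view keeps its complementary information. The loss form and names are assumptions for illustration, not the paper's definitions.

```python
import torch
import torch.nn.functional as F

def asymmetric_alignment(common: torch.Tensor, specifics,
                         temperature: float = 0.5) -> torch.Tensor:
    """common: (batch, dim) fused representation; specifics: a list of
    (batch, dim) view-specific representations. Each view is contrasted
    against the common code with an assumed InfoNCE-style objective."""
    c = F.normalize(common, dim=1)
    loss = 0.0
    for z in specifics:                      # one tensor per view
        z = F.normalize(z, dim=1)
        logits = c @ z.t() / temperature     # (batch, batch) similarities
        targets = torch.arange(c.size(0), device=c.device)
        loss = loss + F.cross_entropy(logits, targets)
    return loss / len(specifics)
```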
Our situated environment is full of uncertainty and highly dynamic, hindering the widespread adoption of machine-led Intelligent Decision-Making (IDM) in real-world scenarios. This means IDM should have the capability of continuously learning new skills and efficiently generalizing across wider applications. IDM benefits from any new approaches and theoretical breakthroughs that exhibit Artificial General Intelligence (AGI) breaking the barriers between tasks and applications. Recent research has thoroughly examined the Transformer neural architecture as a backbone foundation model, along with its generalization to various tasks, including computer vision, natural language processing, and reinforcement learning. We therefore argue that a foundation decision model (FDM) can be established by formulating various decision-making tasks as sequence decoding tasks using the Transformer architecture; this would be a promising solution for advancing IDM applications in more complex real-world tasks. In this paper, we elaborate on how a foundation decision model improves the efficiency and generalization of IDM. We also discuss potential applications of an FDM in multi-agent game AI, production scheduling, and robotics tasks. Finally, through a case study, we demonstrate our realization of the FDM, DigitalBrain (DB1), with 1.2 billion parameters, which achieves human-level performance on 453 tasks, including text generation, image captioning, video game playing, robotic control, and traveling salesman problems. As a foundation decision model, DB1 is a baby step towards more autonomous and efficient real-world IDM applications.
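As an illustration of the sequence-decoding formulation (in the spirit of decision-as-sequence-modeling work; every name below is a placeholder, not DB1's actual design), a trajectory of observation/action/reward tokens can be flattened into one discrete sequence and modeled by a causal Transformer trained to predict the next token:

```python
import torch
import torch.nn as nn

class TrajectoryDecoder(nn.Module):
    """Hypothetical sketch: decision-making as next-token prediction over
    a flattened (observation, action, reward) token stream."""
    def __init__(self, vocab_size=1024, d_model=256, n_layers=4, n_heads=8):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.decoder = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, vocab_size)

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        # tokens: (batch, seq) of discretized trajectory symbols
        seq = tokens.size(1)
        causal = nn.Transformer.generate_square_subsequent_mask(seq)
        h = self.decoder(self.embed(tokens), mask=causal.to(tokens.device))
        return self.head(h)  # next-token logits, (batch, seq, vocab_size)
```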
Self-supervised representation learning follows a paradigm of withholding some part of the data and tasking the network to predict it from the remaining part. Towards this end, masking has emerged as a generic and powerful tool where content is withheld along the sequential dimension, e.g., spatial in images, temporal in audio, and syntactic in language. In this paper, we explore the orthogonal channel dimension for generic data augmentation. The data in each channel is quantized through a non-uniform quantizer, with the quantized value sampled randomly within randomly sampled quantization bins. From another perspective, quantization is analogous to channel-wise masking, as it removes the information within each bin but preserves the information across bins. We apply the randomized quantization in conjunction with sequential augmentations on self-supervised contrastive models. This generic approach achieves results on par with modality-specific augmentation on vision tasks, and state-of-the-art results on 3D point clouds as well as on audio. We also demonstrate that the method is applicable to augmenting intermediate embeddings in a deep neural network, on the comprehensive DABS benchmark comprising various data modalities. Code is available at http://www.github.com/microsoft/random_quantize.
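One plausible reading of the randomized quantizer, sketched below for a single channel (an assumption for illustration, not the authors' exact implementation): bin edges are drawn at random over the channel's value range (making the quantizer non-uniform), and each value is replaced by a random point inside its bin, removing within-bin information while preserving across-bin structure.

```python
import torch

def randomized_quantize(x: torch.Tensor, n_bins: int = 8) -> torch.Tensor:
    """Randomized quantization of one channel tensor x (any shape)."""
    lo, hi = x.min(), x.max()
    # Random, sorted interior edges over [lo, hi] -> non-uniform bins.
    edges = torch.sort(torch.rand(n_bins - 1, device=x.device)).values
    edges = lo + (hi - lo) * edges
    edges = torch.cat([lo.view(1), edges, hi.view(1)])
    idx = torch.bucketize(x, edges[1:-1])        # bin index per element
    left, right = edges[idx], edges[idx + 1]     # that element's bin bounds
    # Replace each value by a uniformly random point within its bin.
    return left + (right - left) * torch.rand_like(x)
```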
User and product information associated with a review is useful for sentiment polarity prediction. Typical approaches incorporating such information focus on modeling users and products as implicitly learned representation vectors. Most do not exploit the potential of historical reviews, and those that do either require unnecessary modifications to the model architecture or fail to make full use of user/product associations. The contribution of this work is twofold: i) a method to explicitly employ historical reviews belonging to the same user/product to initialize representations, and ii) efficient incorporation of textual associations between users and products via a user-product cross-context module. Experiments on the IMDb, Yelp-2013, and Yelp-2014 benchmarks show that our approach substantially outperforms the previous state of the art. Since we employ BERT-base as the encoder, we additionally provide experiments showing that our approach also performs well with Span-BERT and Longformer. Furthermore, experiments in which the reviews of each user/product in the training data are downsampled demonstrate the effectiveness of our approach in a low-resource setting.
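A minimal sketch of the history-based initialization under assumed names (using a Hugging Face-style encoder interface; not the paper's exact procedure): a user's representation starts as the mean-pooled [CLS] vector over that user's past reviews, with products handled symmetrically.

```python
import torch

def init_entity_vector(history_texts, encoder, tokenizer) -> torch.Tensor:
    """Encode each historical review of a user (or product) with a
    BERT-style encoder and mean-pool the [CLS] vectors to form the
    entity's initial representation vector."""
    with torch.no_grad():
        cls_vecs = []
        for text in history_texts:
            batch = tokenizer(text, return_tensors="pt",
                              truncation=True, max_length=512)
            cls_vecs.append(encoder(**batch).last_hidden_state[:, 0])
        return torch.cat(cls_vecs, dim=0).mean(dim=0)  # (hidden_size,)
```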
In this work, we propose an ID-preserving talking head generation framework that advances previous methods in two aspects. First, as opposed to interpolating from sparse flow, we claim that dense landmarks are crucial to achieving accurate geometry-aware flow fields. Second, inspired by face-swapping methods, we adaptively fuse the source identity during synthesis, so that the network better preserves the key characteristics of the image portrait. Although the proposed model surpasses prior generation fidelity on established benchmarks, personalized fine-tuning is usually still needed to make talking head generation qualified for real usage. However, this process is computationally demanding and unaffordable to standard users. To solve this, we propose a fast adaptation model using a meta-learning approach; the learned model can be adapted into a high-quality personalized model in as little as 30 seconds. Last but not least, a spatial-temporal enhancement module is proposed to improve fine details while ensuring temporal coherency. Extensive experiments prove the significant superiority of our approach over the state of the art in both one-shot and personalized settings.
Monocular depth estimation is a challenging problem on which deep neural networks have demonstrated great potential. However, depth maps predicted by existing deep models usually lack fine-grained details, due to convolution operations and down-sampling in the networks. We find that increasing the input resolution helps preserve more local details, while estimation at low resolution is more accurate globally. We therefore propose a novel depth map fusion module that combines the advantages of estimations from multi-resolution inputs. Instead of merging the low- and high-resolution estimations equally, we adopt the core idea of Poisson fusion, implanting the gradient domain of the high-resolution depth into the low-resolution depth. While classic Poisson fusion requires a fusion mask as supervision, we propose a self-supervised framework based on guided image filtering. We demonstrate that this gradient-based composition is far more robust to noise than the state-of-the-art depth map fusion method. Our lightweight depth fusion is one-shot and runs in real time, making our method 80X faster than a state-of-the-art depth fusion method. Quantitative evaluations demonstrate that the proposed method can be integrated into many fully convolutional monocular depth estimation backbones with a significant performance boost, leading to state-of-the-art detail enhancement on depth maps.
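A sketch of the gradient-domain compositing objective implied above (an illustrative reading, not the paper's exact module): keep the fused depth close to the globally accurate low-resolution estimate in the value domain, while matching the gradient field of the detail-rich high-resolution estimate, i.e., a discrete Poisson-style objective.

```python
import torch.nn.functional as F

def gradient_fusion_loss(fused, low_res, high_res, w_grad: float = 1.0):
    """fused / low_res / high_res: depth maps of shape (batch, 1, H, W).
    Value term anchors global scale to the low-res estimate; gradient term
    implants the high-res gradient field into the fused result."""
    def grads(d):
        gx = d[..., :, 1:] - d[..., :, :-1]  # horizontal finite difference
        gy = d[..., 1:, :] - d[..., :-1, :]  # vertical finite difference
        return gx, gy
    gx_f, gy_f = grads(fused)
    gx_h, gy_h = grads(high_res)
    data = F.l1_loss(fused, low_res)
    grad = F.l1_loss(gx_f, gx_h) + F.l1_loss(gy_f, gy_h)
    return data + w_grad * grad
```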